
    Assessing the impact of biomedical research in academic institutions of disparate sizes

    Background: The evaluation of academic research performance is nowadays a priority issue. Bibliometric indicators such as the number of publications, total citation counts and the h-index are an indispensable tool in this task, but their inherent association with the size of the research output may result in rewarding high production when institutions of disparate sizes are evaluated. The aim of this study is to propose an indicator that facilitates such comparisons.
    Methods: The Modified Impact Index (MII) was defined as the ratio of the observed h-index (h) of an institution over the h-index anticipated for that institution on average, given the number of publications (N) it produces, i.e. MII = h / 10^(α + β·log10 N), where α and β denote the intercept and the slope, respectively, of the line describing the dependence of the h-index on the number of publications in log10 scale. MII values higher than 1 indicate that an institution performs better than average in terms of its h-index. Data on scientific papers published during 2002–2006 within 36 medical fields for 219 Academic Medical Institutions from 16 European countries were used to estimate α and β and to calculate the MII of their total and field-specific production.
    Results: In our biomedical research data, the slope β governing the dependence of the h-index on the number of publications was found to be similar to that estimated in other disciplines (≈0.4). The MII was positively associated with the average number of citations per publication (r = 0.653, p …).
    Conclusion: The MII should complement the use of the h-index when comparing the research output of institutions of disparate sizes. It has a conceptual interpretation and, with the data provided here, can be computed for the total research output as well as for field-specific publication sets of institutions in biomedicine.
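    To make the MII definition above concrete, here is a minimal sketch in Python; the regression coefficients, institution figures, and function name are hypothetical placeholders, not values or code from the study.

```python
import math

def modified_impact_index(h_observed: float, n_publications: int,
                          alpha: float, beta: float) -> float:
    """MII = observed h-index / expected h-index, where the expected
    h-index follows log10(h) = alpha + beta * log10(N)."""
    h_expected = 10 ** (alpha + beta * math.log10(n_publications))
    return h_observed / h_expected

# Hypothetical example: an institution with 1,200 papers and an observed
# h-index of 40, against illustrative (not estimated) coefficients.
alpha, beta = 0.5, 0.4
mii = modified_impact_index(40, 1200, alpha, beta)
print(f"MII = {mii:.2f}")  # values above 1 indicate better-than-expected impact
```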

    A Rejoinder on Energy versus Impact Indicators

    Citation distributions are so skewed that using the mean or any other central tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy, Exergy, and Entropy, or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the (100) percentiles of the distribution. Observed values can be tested against expected ones; impact can be qualified at the article level and then aggregated.
    Comment: Scientometrics, in press
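    The abstract does not spell out how I3 weights and aggregates the percentiles, so the following is only an illustrative sketch of percentile-based (non-parametric) impact scoring with made-up citation counts, not the I3 formula itself.

```python
import numpy as np

def percentile_ranks(citations: list[int]) -> np.ndarray:
    """Percentile rank (0-100) of each paper within a reference distribution."""
    c = np.asarray(citations, dtype=float)
    # Share of the reference set that each paper matches or outperforms.
    return np.array([100.0 * np.mean(c <= value) for value in c])

# Hypothetical citation counts for one unit's papers (skewed, as is typical).
unit = [0, 0, 1, 1, 2, 3, 5, 8, 13, 40]
ranks = percentile_ranks(unit)

# Qualify impact at the article level first, then aggregate: each paper
# contributes its percentile rank rather than its raw citation count.
print(f"mean citations = {np.mean(unit):.1f} (distorted by the single outlier)")
print(f"sum of percentile ranks = {ranks.sum():.0f}")
```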

    Indicators for the Data Usage Index (DUI): an incentive for publishing primary biodiversity data through global information infrastructure

    Background: A professional recognition mechanism is required to encourage expedited publishing of an adequate volume of 'fit-for-use' biodiversity data. As a component of such a recognition mechanism, we propose the development of the Data Usage Index (DUI) to demonstrate to data publishers that their efforts in creating biodiversity datasets have impact by being accessed and used by a wide spectrum of user communities.
    Discussion: We propose and give examples of a range of 14 absolute and normalized biodiversity dataset usage indicators for the development of a DUI based on search events and dataset download instances. The DUI is proposed to include relative as well as species-profile-weighted comparative indicators.
    Conclusions: We believe that, in addition to the recognition of the data publisher and all players involved in the data life cycle, a DUI will also provide much-needed yet novel insight into how users use primary biodiversity data. A DUI consisting of a range of usage indicators obtained from the GBIF network and other relevant access points is within reach. The usage of biodiversity datasets leads to the development of a family of indicators in line with well-known citation-based measurements of recognition.
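    The 14 indicators themselves are not enumerated in the abstract; purely as an illustration of what normalized and relative usage indicators built from search events and downloads might look like, here is a sketch with hypothetical dataset names, fields, and figures.

```python
from dataclasses import dataclass

@dataclass
class DatasetUsage:
    name: str
    records: int    # primary records published in the dataset
    searches: int   # search events that returned the dataset
    downloads: int  # download instances of the dataset

def downloads_per_thousand_records(d: DatasetUsage) -> float:
    """Normalized indicator: downloads per 1,000 records, so datasets of
    very different sizes can be compared on the same scale."""
    return 1000.0 * d.downloads / d.records if d.records else 0.0

def search_to_download_ratio(d: DatasetUsage) -> float:
    """Relative indicator: how often a search event converts into a download."""
    return d.downloads / d.searches if d.searches else 0.0

# Hypothetical datasets of very different sizes.
datasets = [
    DatasetUsage("herbarium_A", records=1_200_000, searches=5_400, downloads=860),
    DatasetUsage("bird_survey_B", records=45_000, searches=900, downloads=310),
]
for d in datasets:
    print(d.name, round(downloads_per_thousand_records(d), 2),
          round(search_to_download_ratio(d), 2))
```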

    On the correlation between bibliometric indicators and peer review: reply to Opthof and Leydesdorff

    Opthof and Leydesdorff (Scientometrics, 2011) reanalyze data reported by Van Raan (Scientometrics 67(3):491–502, 2006) and conclude that there is no significant correlation between, on the one hand, average citation scores measured using the CPP/FCSm indicator and, on the other hand, the quality judgment of peers. We point out that Opthof and Leydesdorff draw their conclusions from a very limited amount of data. We also criticize the statistical methodology used by Opthof and Leydesdorff. Using a larger amount of data and a more appropriate statistical methodology, we do find a significant correlation between the CPP/FCSm indicator and peer judgment.
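    The abstract does not state which statistical method the authors consider more appropriate; as a generic illustration of how an indicator can be correlated with ordinal peer ratings, here is a rank-correlation sketch with made-up scores.

```python
from scipy.stats import spearmanr

# Hypothetical data: CPP/FCSm scores of research groups and the ordinal
# peer-review ratings (1 = unsatisfactory ... 5 = excellent) they received.
cpp_fcsm = [0.6, 0.8, 1.1, 1.3, 1.7, 2.4, 0.9, 1.5]
peer_rating = [2, 3, 3, 4, 4, 5, 2, 4]

# Spearman's rank correlation is a common default when one variable is
# ordinal; it makes no assumption about the shape of either distribution.
rho, p_value = spearmanr(cpp_fcsm, peer_rating)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```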

    Towards a new crown indicator: an empirical analysis

    We present an empirical comparison between two normalization mechanisms for citation-based indicators of research performance. These mechanisms aim to normalize citation counts for the field and the year in which a publication was published. One mechanism is applied in the current, so-called crown indicator of our institute. The other mechanism is applied in the new crown indicator that our institute is currently exploring. We find that at high aggregation levels, such as at the level of large research institutions or at the level of countries, the differences between the two mechanisms are very small. At lower aggregation levels, such as at the level of research groups or at the level of journals, the differences between the two mechanisms are somewhat larger. We pay special attention to the way in which recent publications are handled. These publications typically have very low citation counts and should therefore be handled with special care.
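    The abstract does not define the two mechanisms themselves; in the bibliometric literature they are usually characterized as a ratio of averages versus an average of ratios, illustrated below with hypothetical citation counts and expected values.

```python
# Two common ways to normalize citation counts by field and year:
# a "ratio of averages" and an "average of ratios". Numbers are hypothetical.

citations = [0, 2, 5, 30]          # actual citations of four publications
expected = [1.5, 2.0, 4.0, 6.0]    # field- and year-expected citation rates

# Ratio of averages: total citations divided by total expected citations.
ratio_of_averages = sum(citations) / sum(expected)

# Average of ratios: normalize each publication first, then average.
average_of_ratios = sum(c / e for c, e in zip(citations, expected)) / len(citations)

print(f"ratio of averages = {ratio_of_averages:.2f}")
print(f"average of ratios = {average_of_ratios:.2f}")
# The two tend to agree for large publication sets and can diverge for small
# ones, where a few lowly cited recent papers carry relatively more weight.
```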

    Measuring co-authorship and networking-adjusted scientific impact

    Appraisal of the scientific impact of researchers, teams and institutions with productivity and citation metrics has major repercussions. Funding and promotion of individuals and survival of teams and institutions depend on publications and citations. In this competitive environment, the number of authors per paper is increasing and apparently some co-authors do not satisfy authorship criteria. Listing of individual contributions is still sporadic and also open to manipulation. Metrics are needed to measure the networking intensity for a single scientist or group of scientists, accounting for patterns of co-authorship. Here, I define I1 for a single scientist as the number of authors who appear in at least I1 papers of the specific scientist. For a group of scientists or institution, In is defined as the number of authors who appear in at least In papers that bear the affiliation of the group or institution. I1 depends on the number of papers authored, Np. The power exponent R of the relationship between I1 and Np categorizes scientists as solitary (R > 2.5), nuclear (R = 2.25–2.5), networked (R = 2–2.25), extensively networked (R = 1.75–2) or collaborators (R < 1.75). R may be used to adjust the citation impact of a scientist for co-authorship networking. In similarly provides a simple measure of the effective networking size to adjust the citation impact of groups or institutions. Empirical data are provided for single scientists and institutions for the proposed metrics. Cautious adoption of adjustments for co-authorship and networking in scientific appraisals may offer incentives for more accountable co-authorship behaviour in published articles.
    Comment: 25 pages, 5 figures
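    A minimal sketch of how the single-scientist I1 defined above could be computed, h-index-style, from co-author appearance counts; the author names and paper lists are made up.

```python
from collections import Counter

def i1_index(papers: list[list[str]], scientist: str) -> int:
    """I1: the largest k such that at least k distinct co-authors each appear
    on at least k of the scientist's papers (an h-index over co-author counts)."""
    counts = Counter()
    for authors in papers:
        for author in set(authors):
            if author != scientist:
                counts[author] += 1
    appearances = sorted(counts.values(), reverse=True)
    k = 0
    while k < len(appearances) and appearances[k] >= k + 1:
        k += 1
    return k

# Hypothetical publication list of one scientist.
papers = [
    ["Smith", "Lee", "Chen"],
    ["Smith", "Lee"],
    ["Smith", "Chen", "Garcia"],
    ["Smith", "Lee", "Garcia"],
]
print(i1_index(papers, "Smith"))  # 2: at least two co-authors appear on >= 2 papers
```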

    Self-citations at the meso and individual levels: effects of different calculation methods

    This paper focuses on the study of self-citations at the meso and micro (individual) levels, on the basis of an analysis of the production (1994–2004) of individual researchers working at the Spanish CSIC in the areas of Biology and Biomedicine and Material Sciences. Two different types of self-citations are described: author self-citations (citations received from the author him/herself) and co-author self-citations (citations received from the researcher's co-authors but without his/her participation). Self-citations do not play a decisive role in the high citation scores of documents either at the individual or at the meso level, which are mainly due to external citations. At the micro level, the percentage of self-citations does not change with professional rank or age, but differences in the relative weight of author and co-author self-citations have been found. The percentage of co-author self-citations tends to decrease with age and professional rank, while the percentage of author self-citations shows the opposite trend. Suppressing author self-citations from citation counts to prevent overblown self-citation practices may result in a greater reduction in the citation counts of older scientists and, particularly, of those in the highest categories. Author and co-author self-citations provide valuable information on the scientific communication process, but external citations are the most relevant for evaluative purposes. As a final recommendation, studies considering self-citations at the individual level should make clear whether author or total self-citations are used, as these can affect researchers differently.
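    To make the distinction between the two self-citation types concrete, here is a small sketch that classifies a single citation from the point of view of one researcher; the names are hypothetical.

```python
def classify_citation(cited_authors: set[str], citing_authors: set[str],
                      focal_author: str) -> str:
    """Classify a citation received by one of the focal author's papers:
    - 'author self-citation'    : the focal author is on the citing paper
    - 'co-author self-citation' : a co-author of the cited paper is on the
                                  citing paper, but the focal author is not
    - 'external citation'       : no overlap with the cited paper's authors
    """
    if focal_author in citing_authors:
        return "author self-citation"
    if (cited_authors - {focal_author}) & citing_authors:
        return "co-author self-citation"
    return "external citation"

# Hypothetical cited paper by Ruiz, Kim and Patel; focal researcher: Ruiz.
cited = {"Ruiz", "Kim", "Patel"}
print(classify_citation(cited, {"Ruiz", "Nguyen"}, "Ruiz"))    # author self-citation
print(classify_citation(cited, {"Kim", "Okafor"}, "Ruiz"))     # co-author self-citation
print(classify_citation(cited, {"Dubois", "Okafor"}, "Ruiz"))  # external citation
```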

    Evaluating Research and Impact: A Bibliometric Analysis of Research by the NIH/NIAID HIV/AIDS Clinical Trials Networks

    Evaluative bibliometrics uses advanced techniques to assess the impact of scholarly work in the context of other scientific work and usually compares the relative scientific contributions of research groups or institutions. Using publications from the National Institute of Allergy and Infectious Diseases (NIAID) HIV/AIDS extramural clinical trials networks, we assessed the presence, performance, and impact of papers published in 2006–2008. Through this approach, we sought to expand traditional bibliometric analyses beyond citation counts to include normative comparisons across journals and fields, visualization of co-authorship across the networks, and assessment of the inclusion of publications in reviews and syntheses. Specifically, we examined the research output of the networks in terms of the a) presence of papers in the scientific journal hierarchy ranked on the basis of journal influence measures, b) performance of publications on traditional bibliometric measures, and c) impact of publications in comparisons with similar publications worldwide, adjusted for journals and fields. We also examined collaboration and interdisciplinarity across the initiative, through network analysis and modeling of co-authorship patterns. Finally, we explored the uptake of network-produced publications in research reviews and syntheses. Overall, the results suggest the networks are producing highly recognized work, engaging in extensive interdisciplinary collaborations, and having an impact across several areas of HIV-related science. The strengths and limitations of the approach for evaluating and monitoring research initiatives are discussed.
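    The networks' publication data are not reproduced here; as an illustration of the co-authorship network analysis the abstract describes, here is a sketch that builds a weighted co-authorship graph from hypothetical author lists using networkx.

```python
import itertools
import networkx as nx

# Hypothetical author lists for a handful of publications.
papers = [
    ["Adams", "Baker", "Cohen"],
    ["Baker", "Cohen", "Diaz"],
    ["Adams", "Diaz"],
    ["Evans", "Foster"],
]

# Nodes are authors; edge weights count how many papers two authors share.
G = nx.Graph()
for authors in papers:
    for a, b in itertools.combinations(sorted(set(authors)), 2):
        weight = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=weight)

print(G.number_of_nodes(), "authors,", G.number_of_edges(), "collaboration ties")
print("connected components:", nx.number_connected_components(G))
print("degree centrality:", nx.degree_centrality(G))
```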